Conversation

@lstein lstein commented Oct 22, 2022


To add a VAE (variational autoencoder) to an existing model:

1. Download the appropriate autoencoder and put it into
   models/ldm/stable-diffusion

   Note that you MUST use a VAE that was written for the
   original CompVis Stable Diffusion codebase. For v1.4,
   that is the file named vae-ft-mse-840000-ema-pruned.ckpt,
   which you can download from https://huggingface.co/stabilityai/sd-vae-ft-mse-original

2. Edit config/models.yaml to contain the following stanza, modifying the `weights`
   and `vae` entries as required to match your weights and VAE model file names.
   There is no need to rename the VAE file.

~~~
stable-diffusion-1.4:
  weights: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
  description: Stable Diffusion v1.4
  config: configs/stable-diffusion/v1-inference.yaml
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  width: 512
  height: 512
~~~

3. Alternatively, from within the `invoke.py` CLI, you may use the command
   `!editmodel stable-diffusion-1.4` to bring up a simple editor that will
   allow you to add the path to the VAE.

4. If you are installing InvokeAI for the first time, you can also
   use `!import_model models/ldm/stable-diffusion/sd-v1-4.ckpt` instead
   to create the configuration from scratch.

5. That's it!
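Before restarting, it can help to sanity-check the edited stanza. The sketch below is not part of InvokeAI; it is a hypothetical stdlib-only check (the `parse_stanza` helper is illustrative) that naively parses the flat key/value stanza shown in step 2 and confirms the fields the loader needs are present:

~~~python
# Hypothetical sanity check for a models.yaml stanza; a sketch, not InvokeAI code.
STANZA = """\
stable-diffusion-1.4:
  weights: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
  description: Stable Diffusion v1.4
  config: configs/stable-diffusion/v1-inference.yaml
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  width: 512
  height: 512
"""

def parse_stanza(text):
    """Return (model_name, fields) from a single flat YAML-like stanza."""
    lines = text.strip().splitlines()
    name = lines[0].rstrip(":")
    fields = {}
    for line in lines[1:]:
        key, _, value = line.strip().partition(": ")
        fields[key] = value
    return name, fields

name, fields = parse_stanza(STANZA)
# 'weights' and 'config' were already required; 'vae' is the new optional entry.
for key in ("weights", "config", "vae"):
    assert key in fields, f"missing {key}"
assert fields["vae"].endswith(".ckpt"), "vae should point at a .ckpt file"
~~~

A real installation would read `config/models.yaml` from disk and could additionally verify that each path exists before launching `invoke.py`.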
tildebyte commented Oct 22, 2022

FFS, I just realized that I'm trying to use the Diffusers version... I'll report back after I've given myself an appropriate punishment 😁

@tildebyte left a comment
LGTM!

@lstein lstein merged commit 51fdbe2 into development Oct 22, 2022
@lstein lstein deleted the model-vae-loading branch October 22, 2022 23:27
@yadamonk

Thanks for adding this so quickly. I just wanted to mention that even v1-5-pruned-emaonly.ckpt benefits from using the better vae-ft-mse-840000-ema-pruned.ckpt.
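For anyone following along with v1.5: the same `vae:` entry drops into a v1.5 stanza in `config/models.yaml`. The stanza below is illustrative only and assumes the checkpoint was kept under its published name, v1-5-pruned-emaonly.ckpt:

~~~
stable-diffusion-1.5:
  weights: models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
  description: Stable Diffusion v1.5
  config: configs/stable-diffusion/v1-inference.yaml
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  width: 512
  height: 512
~~~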
